Black-Box α-Divergence Minimization

Authors

  • José Miguel Hernández-Lobato
  • Yingzhen Li
  • Mark Rowland
  • Daniel Hernández-Lobato
  • Thang D. Bui
  • Richard E. Turner
Abstract

Black-box alpha (BB-α) is a new approximate inference method based on the minimization of α-divergences. BB-α scales to large datasets because it can be implemented using stochastic gradient descent. BB-α can be applied to complex probabilistic models with little effort since it only requires as input the likelihood function and its gradients. These gradients can be easily obtained using automatic differentiation. By changing the divergence parameter α, the method is able to interpolate between variational Bayes (VB) (α → 0) and an algorithm similar to expectation propagation (EP) (α = 1). Experiments on probit regression and neural network regression and classification problems show that BB-α with non-standard settings of α, such as α = 0.5, usually produces better predictions than with α → 0 (VB) or α = 1 (EP).
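
As a concrete illustration of the interpolation mentioned in the abstract (a sketch of mine, not code from the paper; the two Gaussians and all names below are invented for the example), Amari's α-divergence D_α(p‖q) = (1 − ∫ p^α q^{1−α} dθ) / (α(1 − α)) can be evaluated numerically and compared against the two KL divergences it approaches at the endpoints:

import numpy as np

# Two fixed 1-D Gaussians standing in for the exact posterior p and the
# approximation q; the divergence is integrated on a dense grid.
def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]
p = gaussian_pdf(x, 0.0, 1.0)   # stand-in exact posterior
q = gaussian_pdf(x, 0.5, 1.5)   # stand-in approximation

def alpha_divergence(alpha):
    # Amari's alpha-divergence: (1 - \int p^a q^(1-a) dtheta) / (a (1 - a))
    return (1.0 - np.sum(p ** alpha * q ** (1.0 - alpha)) * dx) / (alpha * (1.0 - alpha))

def kl(a, b):
    # KL(a || b) on the same grid
    return np.sum(a * np.log(a / b)) * dx

print(alpha_divergence(1e-4), kl(q, p))        # alpha -> 0 recovers KL(q||p), the VB objective
print(alpha_divergence(1.0 - 1e-4), kl(p, q))  # alpha -> 1 recovers KL(p||q), the EP objective
print(alpha_divergence(0.5))                   # the intermediate setting highlighted above

The two endpoint evaluations agree with the corresponding KL divergences up to discretization error, which is exactly the VB/EP interpolation the abstract describes.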

Similar Papers

Black-box α-divergence for Deep Generative Models

We propose using the black-box α-divergence [1] as a flexible alternative to variational inference in deep generative models. By simply switching the objective function from the variational free energy to the black-box α-divergence objective, we are able to learn better generative models, as demonstrated by a considerable improvement of the test log-likelihood in several preliminary experiments ...

Black-Box α-Divergence Minimization: Supplementary

This section revisits the original EP algorithm as a min-max optimization problem. Recall from the main text that we approximate the true posterior distribution p(θ|D) with a distribution in exponential-family form, q(θ) ∝ exp{s(θ)ᵀλ_q}. We then define a set of unnormalized cavity distributions q^{\n}(θ) = exp{s(θ)ᵀλ^{\n}}, one for every data point x_n. Then, according to [Minka, 2001], the EP energy function ...
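
Written out with explicit inner products (a notational reconstruction of the excerpt's definitions; s(θ) is the vector of sufficient statistics and λ the natural parameters):

q(\theta) \propto \exp\{ s(\theta)^\top \lambda_q \}, \qquad q^{\backslash n}(\theta) = \exp\{ s(\theta)^\top \lambda^{\backslash n} \}, \quad n = 1, \dots, N.

EP's fixed points are stationary (saddle) points of the EP energy rather than minima, which is why it is natural to cast EP as a min-max problem over the approximating and cavity parameters.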

Variational Inference via χ Upper Bound Minimization

Variational inference enables Bayesian analysis for complex probabilistic models with massive data sets. It works by positing a family of distributions and finding the member of the family that is closest to the posterior. While successful, variational methods can run into pathologies; for example, they typically underestimate posterior uncertainty. We propose CHIVI, a complementary algorithm ...
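
For context, the χ-based upper bound minimized in the paper above (stated here from that paper, not from this excerpt; z denotes the latent variables and λ the variational parameters) is

\mathrm{CUBO}_n(\lambda) = \frac{1}{n} \log \mathbb{E}_{q(z;\lambda)}\left[ \left( \frac{p(x,z)}{q(z;\lambda)} \right)^{n} \right] \ge \log p(x), \qquad n \ge 1,

so minimizing CUBO_2 minimizes the χ² divergence from the posterior to q; paired with the standard lower bound (ELBO), it sandwiches the log marginal likelihood.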

Variational Inference via χ Upper Bound Minimization

Variational inference (VI) is widely used as an efficient alternative to Markov chain Monte Carlo. It posits a family of approximating distributions q and finds the member closest to the exact posterior p. Closeness is usually measured via a divergence D(q||p) from q to p. While successful, this approach also has problems. Notably, it typically leads to underestimation of the posterior ...
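
For reference, the standard identity behind this objective (background I am supplying, not part of the excerpt) is

\mathrm{KL}(q \,\|\, p(\cdot \mid x)) = \log p(x) - \mathrm{ELBO}(q), \qquad \mathrm{ELBO}(q) = \mathbb{E}_{q(z)}\big[\log p(x,z) - \log q(z)\big],

so maximizing the ELBO minimizes the exclusive divergence KL(q‖p). Because this divergence strongly penalizes q placing mass where p has little, the optimum is zero-forcing, which is the root of the underestimation problem mentioned above.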

Test Selection, Minimization, and Prioritization for Regression Testing

The purpose of this chapter is to introduce techniques for the selection, minimization, and prioritization of tests for regression testing. The source T from which tests are to be selected is likely derived using a combination of black-box and white-box techniques and used for system or component testing. However, when this system or component is modified, for whatever reason, one might be able...


Journal:

Volume   Issue

Pages  -

Publication date: 2015